How can those who live in the light of the day possibly comprehend the depths of the night?
--Nietzsche
Author: Zhen Tong
In this project, we follow the 2002 paper *Fast Bilateral Filtering for the Display of High-Dynamic-Range Images* for HDR image processing. Our contributions include the implementation of:
LDR Fusion
Bilateral Filter in numpy
Tone Mapping in YUV space
Given a list of RAW images with different exposure times, we put them through the first few stages of the ISP: Dead Pixel Correction, Black Level Compensation, Anti-Aliasing Filter, and Auto White Balance and Gain Control. Then we can move on to the RAW Exposure Fusion process.
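The front end described above is just a pipeline of Bayer-domain transforms, so it can be sketched as plain function composition. The stage implementations below are hypothetical stand-ins (a no-op DPC/AAF, a toy black-level subtraction, unit AWB gain), not the project's actual modules:

```python
import numpy as np

def preprocess_raw(raw, stages):
    # The ISP front end is a pipeline: each stage maps a
    # Bayer-domain frame to a Bayer-domain frame.
    for stage in stages:
        raw = stage(raw)
    return raw

# Hypothetical stand-ins for DPC / BLC / AAF / AWB:
black_level = 64
stages = [
    lambda x: x,                                  # Dead Pixel Correction (no-op here)
    lambda x: np.clip(x - black_level, 0, None),  # Black Level Compensation
    lambda x: x,                                  # Anti-Aliasing Filter (no-op here)
    lambda x: x * 1.0,                            # AWB and Gain Control (unit gain)
]
raw = np.full((4, 4), 100.0)
out = preprocess_raw(raw, stages)
print(out[0, 0])  # 100 - 64 = 36.0
```

Each fused exposure goes through the same chain before merging.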
JPG images with different exposure times:
Here, the weight determines how much each exposure contributes when we estimate the true intensity. After we run the `get_fusion_weights()` function, we get the weight map for each exposure. From the following graphs, we can observe that most of the pixels have zero weight, and the pixels with nonzero weight are concentrated in the well-exposed regions.
Output Weight as Image:
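The exact weight function is not reproduced here; a common hat-shaped choice (in the spirit of Debevec-Malik) that produces the behavior described above, zero weight for clipped pixels and most weight in the mid-tones, looks like this. `get_fusion_weights_sketch`, `z_min`, and `z_max` are hypothetical names, not the project's API:

```python
import numpy as np

def get_fusion_weights_sketch(img, z_min=0.05, z_max=0.95):
    """Hat-shaped weight on normalized intensities: favors mid-tones,
    drops to zero at under-/over-exposed extremes."""
    w = np.minimum(img - z_min, z_max - img)
    return np.clip(w, 0.0, None)

vals = np.array([0.0, 0.5, 1.0])
weights = get_fusion_weights_sketch(vals)
print(weights)  # [0.   0.45 0.  ]
```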
From the merging output (`LDR Exposurs.png`), we can see that the 7 given exposures were mapped onto the HDR scale. The first 7 pictures show the weight value of each pixel.
.EXR File

After the merging, we use the Malvar Color Filter Array Interpolation to get the RGB image, and save it with the OpenEXR and Imath packages:
```python
import numpy as np
import OpenEXR
import Imath

def save_hdr_image_to_exr(rgb_matrix, file_path):
    # Convert the RGB matrix to a 32-bit float array
    hdr_data = np.array(rgb_matrix, dtype=np.float32)
    # Create an OpenEXR image header with the dimensions of the matrix
    header = OpenEXR.Header(hdr_data.shape[1], hdr_data.shape[0])
    # Declare three 32-bit float channels in the header
    float_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
    header['channels'] = {'R': float_chan, 'G': float_chan, 'B': float_chan}
    # Create an OpenEXR output file object
    exr_image = OpenEXR.OutputFile(file_path, header)
    # Convert the RGB matrix to a planar configuration (separate R, G, B channels)
    r_channel = hdr_data[:, :, 0].tobytes()
    g_channel = hdr_data[:, :, 1].tobytes()
    b_channel = hdr_data[:, :, 2].tobytes()
    # Write the channel data and close the EXR image file
    exr_image.writePixels({'R': r_channel, 'G': g_channel, 'B': b_channel})
    exr_image.close()
```
Fuse seven pictures into one.
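The merge itself can be sketched as a weighted radiance average: each LDR frame is divided by its exposure time (to estimate scene radiance), then averaged with the fusion weights. This is a hypothetical form of the standard merge, not the project's exact code:

```python
import numpy as np

def merge_exposures_sketch(imgs, times, weights):
    """Weighted radiance merge: HDR = sum(w_i * img_i / t_i) / sum(w_i)."""
    eps = 1e-8  # avoid division by zero where every weight vanishes
    num = sum(w * img / t for img, t, w in zip(imgs, times, weights))
    den = sum(weights)
    return num / (den + eps)

# Two frames of the same scene at 1x and 2x exposure should agree
# on the recovered radiance:
imgs = [np.array([0.2]), np.array([0.4])]
times = [1.0, 2.0]
weights = [np.ones(1), np.ones(1)]
hdr = merge_exposures_sketch(imgs, times, weights)
print(hdr)  # approximately [0.2]
```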
The output HDR image needs to be mapped from its huge dynamic range down to a displayable range, which is the job of tone mapping.
The idea of Tone Mapping is to separate the base and the detail of the Y channel in YUV, and use the bilateral filter so that the detail is preserved. The Bilateral Filter is an edge-preserving denoising filter: it smooths flat regions while keeping sharp edges intact.
Using the vectorization of the numpy computation, we can speed up the filter: we first get all the indices of the kernel computation locations using np.meshgrid, and then do each kernel-offset computation over the whole image at once.
```python
h, w = img.shape[0], img.shape[1]
img_pad = np.pad(img, radius)
# Index grids for every pixel location, in (row, col) order
ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
P = img_pad[ys+radius, xs+radius]   # center pixel values
W = np.zeros_like(img)
base = np.zeros_like(img)
for i in tqdm(range(2*radius+1)):
    for j in range(2*radius+1):
        Q = img_pad[ys+i, xs+j]     # all neighbors at offset (i, j) at once
        val = P - Q
        gr = 1/(sig1*np.sqrt(2*np.pi)) * np.exp(-0.5*(val**2)/(sig1**2))  # range kernel
        dis = np.sqrt((i-radius)**2 + (j-radius)**2)
        gs = 1/(sig2*np.sqrt(2*np.pi)) * np.exp(-0.5*(dis**2)/(sig2**2))  # spatial kernel
        grgs = gr*gs
        base += grgs*Q
        W += grgs
base /= W   # normalize by the total weight at each pixel
```
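As a sanity check, the same per-offset vectorization can be wrapped into a self-contained function and run on a constant image, which any bilateral filter must return unchanged. The Gaussian normalization constants are dropped here since they cancel in the final division:

```python
import numpy as np

def bilateral_filter(img, radius=2, sig1=0.1, sig2=2.0):
    """Vectorized bilateral filter: sig1 = range sigma, sig2 = spatial sigma."""
    h, w = img.shape
    img_pad = np.pad(img, radius, mode='edge')
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    P = img_pad[ys + radius, xs + radius]          # center pixel values
    num = np.zeros_like(img, dtype=np.float64)
    W = np.zeros_like(img, dtype=np.float64)
    for i in range(2 * radius + 1):
        for j in range(2 * radius + 1):
            Q = img_pad[ys + i, xs + j]            # shifted neighbor plane
            gr = np.exp(-0.5 * (P - Q) ** 2 / sig1 ** 2)  # range kernel
            dis2 = (i - radius) ** 2 + (j - radius) ** 2
            gs = np.exp(-0.5 * dis2 / sig2 ** 2)          # spatial kernel
            num += gr * gs * Q
            W += gr * gs
    return num / W

# A constant image must pass through unchanged.
flat = np.full((8, 8), 0.5)
out = bilateral_filter(flat)
print(np.allclose(out, 0.5))  # True
```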
According to the paper:
we only use a two scale decomposition, where the “base” image is computed using bilateral filtering, and the detail layer is the division of the input intensity by the base layer.
First we convert the output of CFA interpolation directly from RGB into YUV, then use the Bilateral Filter on the Y channel to get the base layer, and obtain the detail layer as the division of Y by the base. Then we can get the new Y channel by compressing the base layer and recombining it with the detail. Applying the new Y channel data, recomposing with the UV channels, and converting back into RGB, we get the output.
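The two-scale recombination can be sketched as follows. The log-domain base compression is our paraphrase of Durand & Dorsey's method, not the project's exact code, and `target_contrast` is a hypothetical parameter:

```python
import numpy as np

def tone_map_y(Y, base, target_contrast=5.0):
    """Two-scale tone mapping sketch (after Durand & Dorsey 2002).
    `base` is the bilateral-filtered Y; detail = Y / base."""
    eps = 1e-6
    detail = Y / (base + eps)
    log_base = np.log10(base + eps)
    # Compress only the base layer, so detail survives at full contrast
    scale = np.log10(target_contrast) / (log_base.max() - log_base.min())
    new_base = 10 ** ((log_base - log_base.max()) * scale)
    return new_base * detail

# With base == Y (detail ~ 1), a 1000:1 input range is squeezed
# to roughly target_contrast:1.
Y = np.array([0.01, 0.1, 1.0, 10.0])
mapped = tone_map_y(Y, Y.copy())
print(mapped.max() / mapped.min())  # approximately 5.0
```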
Then we can go through the rest of the ISP process. (The color of the sky slightly changes)
Compared with Gamma without Tone Mapping:
Using the rest of the ISP pipeline (Edge Enhancement, Brightness/Contrast Control, False Color Suppression, and Hue/Saturation Control), we can get the final output.
The green color on the left is because the trees reflect green light; the yellow on the right is because the stone reflects yellow light. HDR preserves these color details while keeping the texture of the leaves.
You should have the file structure like this:
```
/mnt/DDA4310/src$ tree
.
├── data
│   ├── data.rar
│   ├── set01
│   │   ├── DSC00049.ARW
│   │   ├── DSC00049.JPG
│   │   ├── DSC00050.ARW
│   │   ├── DSC00050.JPG
│   │   ├── DSC00051.ARW
│   │   ├── DSC00051.JPG
│   │   ├── DSC00052.ARW
│   │   ├── DSC00052.JPG
│   │   ├── DSC00053.ARW
│   │   └── DSC00053.JPG
│   ├── set02
│   │   ├── DSC00113.ARW
│   │   ├── DSC00113.JPG
│   │   ├── DSC00114.ARW
│   │   ├── DSC00114.JPG
│   │   ├── DSC00115.ARW
│   │   └── DSC00115.JPG
│   └── set03
│       ├── DSC00163.ARW
│       ├── DSC00163.JPG
│       ├── DSC00164.ARW
│       ├── DSC00164.JPG
│       ├── DSC00165.ARW
│       ├── DSC00165.JPG
│       ├── DSC00166.ARW
│       ├── DSC00166.JPG
│       ├── DSC00167.ARW
│       ├── DSC00167.JPG
│       ├── DSC00168.ARW
│       ├── DSC00168.JPG
│       ├── DSC00169.ARW
│       └── DSC00169.JPG
└── src
    ├── BayerDomainProcessor.py
    ├── HW03.md
    ├── HW03.pdf
    ├── RGBDomainProcessor.py
    ├── YUVDomainProcessor.py
    └── notebook.ipynb
```

Required packages:

```
cv2
numpy
matplotlib
OpenEXR
Imath
```
| Args | Value | Help |
|---|---|---|
| --draw-intermediate | store true | Draw the intermediate pictures for each intermediate stage |
| --set-dir | ../data/set03 | |
| --output-dir | data | The final output image you want to save |
| --itmd-dir | itmd | The intermediate path for the stage you want to save |
| --fast | store true | Use the faster version of the Bilateral Filter |
| --kernel-size | 5 | The kernel size of the BF |
For a status check, we recommend you run notebook.ipynb for the detailed output image of each stage.
See the running log at Running Log.
[Zhihu] Luminance Response and HDR Basics: https://zhuanlan.zhihu.com/p/23981690
Brown University Project 5: High Dynamic Range: https://cs.brown.edu/courses/cs129/2011/asgn/proj5/
Stanford Slides: https://web.stanford.edu/class/ee367/slides/lecture6.pdf
Frédo Durand and Julie Dorsey. Fast Bilateral Filtering for the Display of High-Dynamic-Range Images. SIGGRAPH 2002: https://stanford.edu/class/ee367/reading/FastBilateralFilteringforHDR.pdf
CMU Lecture Notes: http://graphics.cs.cmu.edu/courses/15-463/2018_fall/lectures/lecture6.pdf
Brown Project Guide: https://cs.brown.edu/courses/csci1290/labs/lab_bilateral/index.html